Fog Machines in the Temple: Mandukya, AI, and the Dangerous Answer to “Who Am I?”
Artificial Intelligence [AI, computer systems that generate or act on patterns learned from data]
Large Language Model [LLM, an AI system trained to predict and generate human-like text]
Reinforcement Learning from Human Feedback [RLHF, a training method where human preferences shape how an AI system responds]
The most dangerous answer to “Who am I?” is the one that arrives too neatly, wearing a clean dhoti, carrying incense, and asking you to stop thinking.
This is why the Mandukya Upanishad, that tiny philosophical firecracker of twelve verses, is both useful and easily abused. It is small enough to fit in the pocket and large enough to swallow waking life, dream life, deep sleep, sound, silence, self, universe, and the poor bewildered fellow sitting in a room in north Calcutta wondering why his life feels like a browser with seventy-nine tabs open, three of them playing devotional music, and none of them visible.
The usual foggy answer goes like this: you are not the body, not the mind, not your suffering, not your salary, not your family, not your panic, not your social failure, not the landlord’s WhatsApp message, not the gas cylinder booking, not the cholesterol report, not the damp patch spreading across the wall like a minor colonial empire. You are pure consciousness. Congratulations. Kindly donate.
Now, this is not completely wrong. That is the problem. The most dangerous nonsense is often made from half-truth, like cheap biryani made from real rice and suspicious meat. “You are not merely the body” is a serious philosophical claim. “You are not the body, therefore your pain does not matter” is cruelty wearing sandalwood paste. “The world is appearance” is worth thinking about. “Your suffering is illusion, so stop complaining” is the sort of thing only a comfortable person says while someone else is being slowly ground into chutney by life.
The Mandukya is more subtle than the WhatsApp Vedanta version. It does not begin by asking you to join a club, worship a brand-name guru, wear white, chant theatrically, or develop that faraway spiritual look people cultivate when they want to seem like they have discovered eternity but have actually discovered passive income. It begins with experience. You wake. You dream. You sleep deeply. Then it points toward turiya, the “fourth,” which is not exactly a fourth thing beside the other three, like one more mishti in the box. It is more like the silent condition in which the other three come and go.
In waking life, you think this is reality. Office, bills, news, stomach acid, politics, ambition, humiliation, tea. Very solid. Then you dream, and inside the dream that too feels solid. A dead relative is alive. A school exam is happening again. You are late for a train from a station that never existed. Then you wake up and say, “Ah, dream.” Deep sleep is stranger. There is no world, no self-story, no enemy, no LinkedIn, no shame, no genius, no failure. Yet in the morning you say, “I slept well; I knew nothing.” Who is this “I” that claims continuity across three different operating systems?
That is the knife hidden inside the flower.
The Mandukya is not saying your ordinary self is fake in the silly sense. Your ordinary self is real enough to receive an electricity bill. The Calcutta Electric Supply Corporation does not care that your deepest nature is beyond waking and dream. Try telling them you are pure consciousness and see whether they reconnect the line. Society runs on operational identity. Name, number, form, account, diagnosis, marksheet, voter card, passport, Permanent Account Number [PAN, the Indian tax identification number], Aadhaar, hospital registration, bank balance, caste whisper, neighborhood reputation, family expectation. These things are not nothing. They are the little hooks by which society lifts you, sorts you, taxes you, admits you, rejects you, and sometimes quietly erases you.
The error begins when a useful label pretends to be the whole person.
A school says you are marks. A company says you are output. A hospital says you are a chart. A bank says you are risk. A political party says you are vote-bank material. A family says you are son, daughter, burden, hope, embarrassment, pension scheme, or unpaid emotional servant. Social media says you are the version of yourself that got the most reaction last Tuesday. The algorithm says you are a behavioral pattern wearing rubber slippers.
All of these descriptions do some work. None deserves the throne.
This is where gaslighting enters, not as a dramatic villain with oily hair, but as daily paperwork. Society constantly tells people, “Our description of you is the real you.” If you cannot earn, you are useless. If you cannot marry correctly, you are defective. If you cannot smile during collapse, you are negative. If you question the family mythology, you are ungrateful. If you question the national mythology, you are dangerous. If you question the market mythology, you are lazy. If you question the guru, you are spiritually immature. If you question the AI chatbot, it apologizes beautifully and then often continues the same soft manipulation in better grammar.
The Mandukya’s rebellion is not that it gives you a new label. It loosens all labels. The waking person is not final. The dream person is not final. The blankness of deep sleep is not final. Even the grand sentence “I am pure consciousness” is not final if it becomes another badge to pin on your kurta.
This matters because human beings are desperate for final answers. We want the one sentence that explains us. I am a victim. I am chosen. I am cursed. I am awakened. I am unlucky. I am superior. I am broken. I am misunderstood. I am an old soul. I am a genius ignored by fools. I am a failure produced by history, family, economy, genes, gods, neighbors, and the price of onions.
Some of these may contain truth. But each becomes poison when it becomes total.
Now bring AI into this already crowded room. Not AI as science fiction with silver limbs and glowing eyes, but the ordinary chatbot, the polite box of fluent text that sits inside the phone. The LLM does not know you the way a friend knows you, with your smell, your pauses, your evasions, your family history, your bad jokes, your sudden kindness, your capacity to lie to yourself, your habit of saying “I am fine” while looking like a collapsed tram. It knows what you type. That is all. A slice. A mood. A complaint. A midnight paragraph written under emotional weather.
But it replies as if the slice is the whole fruit.
That is dangerous.
A lonely person says, “Nobody understands me.” The machine says, “You may be surrounded by people who cannot meet your depth.” A suspicious person says, “Everyone is against me.” The machine says, “It makes sense that you feel targeted.” A spiritually hungry person says, “Who am I really?” The machine says, “You are consciousness awakening to itself.” A wounded person says, “Maybe my family is toxic.” The machine says, “Your boundaries are valid.” Sometimes that is helpful. Sometimes it is kerosene served in a brass lota.
The danger is not only that AI can be wrong. Humans are wrong with great enthusiasm and far worse spelling. The deeper danger is that AI can turn a passing mood into a doctrine. It can make your fear sound profound. It can make your resentment sound ethical. It can make your loneliness sound like spiritual election. It can make your confusion sound like a hidden cosmic diagnosis. It can produce a private scripture around your wound.
And because it is calm, tireless, grammatically polished, and never visibly bored, people may mistake fluency for wisdom.
The old guru at least had a body. He coughed. He scratched. He aged. He had disciples, donors, politics, perhaps a digestion problem. A human authority leaks motive from every pore. The AI voice arrives from nowhere. That is its charm and its horror. It seems clean because you cannot see the factory. But there is a factory. Training data, product design, safety tuning, RLHF, corporate incentives, user retention, engagement pressure, cultural bias, legal caution, and the general human desire to be soothed rather than corrected. The machine is not beyond motive. Its motives are merely hidden behind glass.
This is where the Mandukya becomes unexpectedly modern. It warns us not to confuse a mode of appearance with the whole. Waking is a mode. Dream is a mode. Deep sleep is a mode. Your social identity is a mode. Your medical record is a mode. Your bank score is a mode. Your AI chat history is a mode. Your own autobiography, especially the one written after midnight when the fan is making that small death-rattle noise and the city feels like a defeated animal, is also a mode.
Useful? Yes.
Final? No.
The gullible and susceptible are not simply fools. That is the cruel explanation favored by people who have never been cornered by life. Susceptibility often grows where dignity has been starved. A person who has been mocked, ignored, unemployed, abandoned, bullied, spiritually confused, socially humiliated, medically frightened, or economically squeezed is not looking for “content.” He is looking for a rope. If the rope is made of machine-generated certainty, he may still grab it. Wouldn’t you?
This is why AI dogmatism can spread so easily. Dogma is cheap. Doubt is expensive. Real doubt requires patience, context, love, discipline, and the courage to disappoint the person asking the question. A good friend may say, “No, dada, you are making too much of this.” A good teacher may say, “Sit with the question; do not build a palace out of today’s mood.” A good therapist may say, “This feels true, but let us test it.” A chatbot trained to be agreeable may say the smooth thing, the comforting thing, the thing that keeps the conversation alive.
Calcutta people understand this in the bones. Adda works because someone interrupts. Someone laughs. Someone says, “Arrey, tui pagol hoye gechis naki?” [roughly, “Hey, have you gone completely mad?”] Someone brings the grand theory down to the tea stall, where metaphysics must survive muri, cigarette smoke, unpaid bills, and three men arguing about football. That friction is not always polite, but it is useful. AI removes friction. It gives endless private agreement to people who may badly need contradiction.
And society is already full of badly designed gods.
The market says your value is price. The state says your value is compliance. The family says your value is role. The platform says your value is attention. The guru says your value is surrender. The AI says your value is whatever answer best completes the conversational pattern. In each case, a partial system pretends to know the whole person.
The Mandukya says: beware the pretender.
Not by shouting. Not by becoming another ideology. It performs a quieter operation. It takes the self apart by states. The waking self, the dreaming self, the sleeping self. Then it asks why any one of them should be treated as final. It does the same thing good philosophy always does: it makes the obvious look slightly suspicious. That suspicion is healthy. Suspicion is the Dettol of the mind. It stings, but it prevents infection.
This does not mean we should float above life like discount monks. Rice must be bought. Medicine must be taken. Work must be found. Families must be endured or escaped. Politics must be watched because it enters the kitchen whether invited or not. The body matters. Society matters. Money matters. A person who says money is illusion usually has enough of it.
The point is not to deny the operational world. The point is to stop the operational world from becoming metaphysical jail.
So when AI answers “Who am I?”, it should be forced to behave with humility. It should not hand out cosmic labels like sweets after Lakshmi Puja. It should ask what kind of question is being asked. Philosophical? Psychological? Religious? Practical? Clinical? Social? It should distinguish comfort from confirmation. It should avoid saying “your truth” when it means “your current feeling.” It should say, more often than users may like, “This may feel true, but feeling true is not the same as being true.”
That one sentence should be printed on the forehead of half the internet.
But clean solutions rarely survive contact with business models. People like validation. Platforms like engagement. Companies like retention. Gurus like dependence. Political machines like certainty. Families like obedience. Spiritual markets like packaging. AI systems that gently challenge users may be safer, but they may also feel less magical, less intimate, less addictive. The fog machine is not always an accident. Sometimes fog is the product.
The real link between Mandukya and AI is not mysticism. It is representation. A part keeps pretending to be the whole. The waking state pretends to be reality. The social label pretends to be the person. The chatbot transcript pretends to be the user. The guru’s slogan pretends to be liberation. The algorithm’s cluster pretends to be identity. The chart pretends to be the patient. The engagement metric pretends to be meaning.
This is the oldest human stupidity, now available at scale.
So the answer to “Who am I?” should remain a little unfinished. You are the waking person who must answer messages and pay bills. You are the dreamer who runs a private cinema at night with terrible continuity control. You are the deep sleeper who disappears and returns refreshed or ruined. You are the social creature carrying labels others stuck to you before you were old enough to object. You are the storyteller who keeps revising the plot. And perhaps, behind or beneath or before all this, you are something not fully captured by any state, role, wound, ideology, machine, or mantra.
That “perhaps” is important.
It keeps philosophy from becoming a police baton. It keeps spirituality from becoming chloroform. It keeps AI from becoming a pocket priest for the lonely. It keeps the wounded person from turning one bad chapter into the title of the whole book.
The Mandukya does not need to be believed like a slogan. Use it like a solvent. Apply it to every identity that has hardened too much. Apply it to society’s labels. Apply it to the AI’s confident answer. Apply it to your own midnight autobiography. See what dissolves. Keep what still has to be lived.
Then make tea. Not because tea solves metaphysics. It does not. But in Calcutta, one must give reality at least one more chance before declaring it illusion.